AI therapist
My friends in Italy are using AI therapists. But is that so bad, when a stigma surrounds mental health? Viola Di Grado
An estimated 5 million Italians are in need of mental health support but are unable to afford it. State provision for psychological health services is lamentable.
- North America > United States (0.15)
- Oceania > Australia (0.06)
- Europe > Italy > Sicily > Palermo (0.05)
- (5 more...)
- Information Technology > Communications > Social Media (0.73)
- Information Technology > Artificial Intelligence > Applied AI (0.71)
The ascent of the AI therapist
Four new books grapple with a global mental-health crisis and the dawn of algorithmic therapy.
[Image caption: A technician adjusts the wiring inside the Mark I Perceptron. This early AI system was designed not by a mathematician but by a psychologist.]
More than a billion people worldwide suffer from a mental-health condition, according to the World Health Organization. The prevalence of anxiety and depression is growing in many demographics, particularly young people, and suicide is claiming hundreds of thousands of lives globally each year. Given the clear demand for accessible and affordable mental-health services, it's no wonder that people have looked to artificial intelligence for possible relief.
- North America > United States > Massachusetts (0.04)
- Asia > China (0.04)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.98)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.75)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.30)
MIRROR: Multimodal Cognitive Reframing Therapy for Rolling with Resistance
Kim, Subin, Kim, Hoonrae, Lee, Jihyun, Jeon, Yejin, Lee, Gary Geunbae
Recent studies have explored the use of large language models (LLMs) in psychotherapy; however, text-based cognitive behavioral therapy (CBT) models often struggle with client resistance, which can weaken the therapeutic alliance. To address this, we propose a multimodal approach that incorporates nonverbal cues, allowing the AI therapist to better align its responses with the client's negative emotional state. Specifically, we introduce Mirror (Multimodal Interactive Rolling with Resistance), a novel synthetic dataset that pairs each client statement with a corresponding facial image. Using this dataset, we train baseline vision-language models (VLMs) to analyze facial cues, infer emotions, and generate empathetic responses that effectively manage client resistance. These models are then evaluated on both their counseling skills as a therapist and the strength of the therapeutic alliance in the presence of client resistance. Our results demonstrate that Mirror significantly enhances the AI therapist's ability to handle resistance, outperforming existing text-based CBT approaches. Human expert evaluations further confirm the effectiveness of our approach in managing client resistance and fostering therapeutic alliance.
- Asia > Thailand > Bangkok > Bangkok (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- (4 more...)
Illinois' ban on AI therapy won't stop people from asking chatbots for help
Illinois has become the first state to enact legislation banning the use of AI tools like ChatGPT for providing therapy. The bill, signed into law by Governor J.B. Pritzker last Friday, comes amid growing research showing an increase in people experimenting with AI for mental health as the country faces a shortage of access to professional therapy services. The Wellness and Oversight for Psychological Resources Act, officially called HB 1806, prohibits healthcare providers from using AI for therapy and psychotherapy services. Specifically, it prevents AI chatbots or other AI-powered tools from interacting directly with patients, making therapeutic decisions, or creating treatment plans.
- North America > United States > Illinois (0.68)
- North America > United States > New York (0.06)
- North America > United States > Utah (0.05)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Law > Statutes (0.96)
- Government > Regional Government > North America Government > United States Government (0.30)
Conversational Self-Play for Discovering and Understanding Psychotherapy Approaches
Kampman, Onno P, Xing, Michael, Lim, Charmaine, Jabir, Ahmad Ishqi, Louie, Ryan, Lee, Jimmy, Morris, Robert JT
While AI has accelerated discovery in fields such as protein folding and materials science [1], it has not been widely applied to understanding effective therapy. Large language models (LLMs) are already used for analyzing, assisting, and replacing therapeutic conversations [2, 3, 4, 5], but these efforts primarily replicate known therapeutic approaches (e.g., Cognitive Behavioral Therapy [CBT] and Motivational Interviewing [MI]) rather than contribute new ones. Of particular interest are deviations from standard approaches, such as the use of novel therapeutic techniques, new ways to sequence therapeutic techniques within a conversation, applications of techniques in unusual contexts, and/or more adaptive approaches based on client characteristics. What follows is a proof-of-concept study and a discussion on how AI can serve as a discovery engine for psychotherapy research.
- Asia > Singapore (0.07)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Research Report > Experimental Study (0.68)
- Research Report > New Finding (0.68)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- Health & Medicine > Consumer Health (0.94)
I'm a Therapist, and I'm Replaceable. But So Are You
I'm a psychologist, and AI is coming for my job. The signs are everywhere: a client showing me how ChatGPT helped her better understand her relationship with her parents; a friend ditching her in-person therapist to process anxiety with Claude; a startup raising $40 million to build a super-charged AI therapist. The other day on TikTok, I came across an influencer sharing how she doesn't need friends; she can just vent to God and ChatGPT. "ChatGPT talked me out of self-sabotaging." "It knows me better than any human walking this earth."
- North America > United States (0.05)
- Europe > Austria > Vienna (0.05)
Multimodal Cognitive Reframing Therapy via Multi-hop Psychotherapeutic Reasoning
Kim, Subin, Kim, Hoonrae, Do, Heejin, Lee, Gary Geunbae
Previous research has revealed the potential of large language models (LLMs) to support cognitive reframing therapy; however, their focus was primarily on text-based methods, often overlooking the importance of non-verbal evidence crucial in real-life therapy. To alleviate this gap, we extend the textual cognitive reframing to multimodality, incorporating visual clues. Specifically, we present a new dataset called Multi Modal-Cognitive Support Conversation (M2CoSC), which pairs each GPT-4-generated dialogue with an image that reflects the virtual client's facial expressions. To better mirror real psychotherapy, where facial expressions lead to interpreting implicit emotional evidence, we propose a multi-hop psychotherapeutic reasoning approach that explicitly identifies and incorporates subtle evidence. Our comprehensive experiments with both LLMs and vision-language models (VLMs) demonstrate that the VLMs' performance as psychotherapists is significantly improved with the M2CoSC dataset. Furthermore, the multi-hop psychotherapeutic reasoning method enables VLMs to provide more thoughtful and empathetic suggestions, outperforming standard prompting methods.
- Asia > Singapore (0.05)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > South Korea (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.68)
Script-Based Dialog Policy Planning for LLM-Powered Conversational Agents: A Basic Architecture for an "AI Therapist"
Wasenmüller, Robert, Hilbert, Kevin, Benzmüller, Christoph
Large Language Model (LLM)-Powered Conversational Agents have the potential to provide users with scaled behavioral healthcare support, and potentially even deliver full-scale "AI therapy" in the future. While such agents can already conduct fluent and proactive emotional support conversations, they inherently lack the ability to (a) consistently and reliably act by predefined rules to align their conversation with an overarching therapeutic concept and (b) make their decision paths inspectable for risk management and clinical evaluation -- both essential requirements for an "AI Therapist". In this work, we introduce a novel paradigm for dialog policy planning in conversational agents enabling them to (a) act according to an expert-written "script" that outlines the therapeutic approach and (b) explicitly transition through a finite set of states over the course of the conversation. The script acts as a deterministic component, constraining the LLM's behavior in desirable ways and establishing a basic architecture for an AI Therapist. We implement two variants of Script-Based Dialog Policy Planning using different prompting techniques and synthesize a total of 100 conversations with LLM-simulated patients. The results demonstrate the feasibility of this new technology and provide insights into the efficiency and effectiveness of different implementation variants.
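The script-as-deterministic-state-machine idea in this abstract can be sketched roughly as follows. The state names, instructions, and the `generate` stub are illustrative assumptions, not taken from the paper; a real agent would replace `generate` with an LLM call.

```python
# Rough sketch of script-based dialog policy planning: an expert-written
# "script" is a finite set of states; the agent deterministically transitions
# through them, and the LLM only realizes the current state's instruction.
# All state names and instructions here are invented for illustration.

SCRIPT = {
    "greeting": {"instruction": "Welcome the client and ask how they feel.",
                 "next": "explore"},
    "explore":  {"instruction": "Ask open questions about the main concern.",
                 "next": "reframe"},
    "reframe":  {"instruction": "Offer a gentler alternative interpretation.",
                 "next": "closing"},
    "closing":  {"instruction": "Summarize the session and close.",
                 "next": None},
}

def generate(instruction: str, client_utterance: str) -> str:
    """Stand-in for an LLM call; a real system would prompt the model with
    the state's instruction plus the conversation history."""
    return f"[{instruction}] (replying to {client_utterance!r})"

def run_session(client_turns):
    state = "greeting"
    trace = []  # explicit state path: inspectable for risk management
    for utterance in client_turns:
        step = SCRIPT[state]
        trace.append((state, generate(step["instruction"], utterance)))
        if step["next"] is None:
            break
        state = step["next"]
    return trace

trace = run_session(["hi", "I've been anxious", "that helps", "thank you"])
print([state for state, _ in trace])
# → ['greeting', 'explore', 'reframe', 'closing']
```

Because the transition table is deterministic, every reply can be traced back to exactly one script state, which is what makes the agent's decision path auditable.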
Fine Tuning Large Language Models to Deliver CBT for Depression
Cognitive Behavioral Therapy (CBT) is a well-established, evidence-based treatment for Major Depressive Disorder. Unfortunately, there exist significant barriers to individuals accessing CBT, including cost, scarcity of therapists and stigma. This study explores the feasibility of fine-tuning small open weight large language models (LLMs) to deliver CBT for depression. Using 58 sets of synthetic CBT transcripts generated by the Nous Research fine-tune of Llama 3.1 405b, we fine-tuned three models: Mistral 7b v0.3, Qwen 2.5 7b, and Llama 3.1 8b. CBT fidelity was evaluated through a modified Cognitive Therapy Rating Scale (CTRS). All fine-tuned models were compared against each other, as well as their instruct-tuned variants. Simulated patient transcripts were generated for the purpose of evaluating model performance, with the instruct and CBT-tuned models acting as the therapist and DeepSeek-V2.5 acting as the patient. These simulated transcripts were evaluated on a modified CTRS by Gemini 1.5 Pro-002. Our findings demonstrated that the CBT-tuned models significantly outperformed their instruct-tuned counterparts, with an average improvement of 11.33 points (p < 0.001) on total CTRS score. Llama 3.1 8b had the strongest performance (mean CTRS score 67.86 +/- 7.24), followed by Qwen 2.5 7b (64.28 +/- 9.55) and Mistral 7b v0.3 (64.17 +/- 9.79), with these differences between models being statistically significant. The CBT-tuned models were competent in implementing core CBT techniques and providing empathetic responses; however, limitations were observed in agenda adherence, exploration depth and long-context coherence. This study establishes that CBT-specific fine-tuning can effectively encode therapeutic competencies in small LLMs, though significant technical and ethical considerations must be resolved prior to clinical deployment.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States (0.04)
- Asia > Middle East > Jordan (0.04)
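The headline numbers in the abstract above (mean CTRS improvement, significance) come from a paired comparison of each CBT-tuned model against its instruct-tuned variant. A minimal sketch of that arithmetic, with invented scores (only the paired-t formula is standard; nothing below reproduces the paper's data):

```python
# Paired comparison of hypothetical CTRS totals: CBT-tuned vs. instruct-tuned.
# The scores are made up for illustration.
import math
import statistics as st

instruct_ctrs = [52, 55, 49, 58, 51, 54]   # hypothetical instruct-tuned totals
cbt_ctrs      = [64, 66, 60, 69, 62, 65]   # hypothetical CBT-tuned totals

diffs = [c - i for c, i in zip(cbt_ctrs, instruct_ctrs)]
mean_d = st.mean(diffs)                    # average improvement in points
sd_d = st.stdev(diffs)                     # sample standard deviation of diffs
t_stat = mean_d / (sd_d / math.sqrt(len(diffs)))  # paired t, df = n - 1

print(f"mean improvement = {mean_d:.2f} points, paired t = {t_stat:.2f}")
```

The t statistic is then compared against the t distribution with n - 1 degrees of freedom to obtain the p-value the abstract reports.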
The Problem With Mental Health Bots
Teresa Berkowitz's experiences with therapists had been hit or miss. "Some good, some helpful, some just a waste of time and money," she says. When some childhood trauma was reactivated six years ago, instead of connecting with a flesh-and-blood human, Berkowitz--who's in her fifties and lives in the US state of Maine--downloaded Youper, a mental health app with a chatbot therapist function powered by artificial intelligence. Once or twice a week Berkowitz does guided journaling using the Youper chatbot, during which the bot prompts her to spot and change negative thinking patterns as she writes down her thoughts. The app, she says, forces her to rethink what's triggering her anxiety.
- North America > United States > Maine (0.26)
- Asia > Singapore (0.17)
- North America > United States > Massachusetts (0.06)
- (2 more...)